Hierarchical Point Process Models for Recurring Safety Critical Events for Commercial Truck Drivers
This website provides the supplementary materials for the manuscript titled “Hierarchical Point Process Models for Recurring Safety Critical Events for Commercial Truck Drivers”, which is under review at the Journal of the American Statistical Association.
Two models are proposed in this paper:

1. a hierarchical Power Law Process (PLP), a non-homogeneous Poisson process (NHPP) with power-law intensity, estimated at the shift level, and
2. a hierarchical Jump Power Law Process (JPLP), estimated at the trip level.
The first section shows the R code to simulate data following the NHPP and JPLP data-generating processes. Only 10 drivers are simulated to keep the simulation and Bayesian estimation time short. The second section presents the Stan code for Bayesian hierarchical PLP and JPLP estimation. The third section demonstrates the R code that runs Hamiltonian Monte Carlo on the simulated data. The last section presents the R code that extracts summary statistics (posterior mean, 95% credible interval, Gelman-Rubin statistic \(\hat{R}\), and effective sample size ESS) from the posterior distribution.
pacman::p_load(dplyr, rstan, broom)
source('Functions/NHPP_functions.R')
set.seed(123)
df_NHPP = sim_hier_nhpp(D = 10, beta = 1.2)
str(df_NHPP$hier_dat)
List of 9
$ N : int 255
$ K : num 3
$ S : int 95
$ D : int 10
$ id : int [1:95] 1 1 1 1 1 1 1 1 1 1 ...
$ tau : num [1:95] 11.09 9.62 10.42 8.22 10.38 ...
$ event_time : num [1:255] 1.04 3.29 4.15 5.16 6.49 ...
$ group_size : int [1:95] 8 7 1 0 6 16 5 2 3 8 ...
$ X_predictors:'data.frame': 95 obs. of 3 variables:
..$ x1: num [1:95] 0.6651 -0.0857 0.9146 2.0706 0.8546 ...
..$ x2: num [1:95] 1.9446 0.0545 1.0107 2.6988 1.0172 ...
..$ x3: int [1:95] 0 3 3 1 1 1 2 4 2 2 ...
The NHPP is estimated at the shift level. Here are the definitions of the simulated NHPP data passed to Stan:
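For reference, the shift-level PLP likelihood implemented by the `nhpp_log()` function in the Stan code below can be written out in the notation of the code (this restates the manuscript's derivation): for a shift observed up to truncation time \(\tau\) with event times \(t_1 < \dots < t_n\), the PLP intensity, cumulative intensity, and log-likelihood are

```latex
\[
\lambda(t \mid \beta, \theta) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta-1},
\qquad
\Lambda(\tau) = \left(\frac{\tau}{\theta}\right)^{\beta},
\]
\[
\ell(\beta, \theta) = \sum_{i=1}^{n}\left[\log\beta - \beta\log\theta + (\beta-1)\log t_i\right]
- \left(\frac{\tau}{\theta}\right)^{\beta}.
\]
```

A shift with no events contributes only the censoring term \(-(\tau/\theta)^{\beta}\), which is what `nhppnoevent_lp()` returns.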
source('Functions/JPLP_functions.R')
set.seed(123)
df_JPLP = sim_hier_JPLP(D = 10, beta = 1.2)
str(df_JPLP$stan_dt)
List of 11
$ N : int 162
$ K : num 3
$ S : int 331
$ D : num 10
$ id : int [1:331] 1 1 1 1 1 1 1 1 1 1 ...
$ r_trip : int [1:331] 1 2 3 4 5 1 2 3 4 1 ...
$ t_trip_start: num [1:331] 0 1.66 4.71 6.44 9.16 0 2.94 5.29 6.79 0 ...
$ t_trip_end : num [1:331] 1.66 4.71 6.44 9.16 11.09 ...
$ event_time : num [1:162] 2.71 4.57 5.7 6.27 6.66 ...
$ group_size : int [1:331] 0 2 2 1 0 1 1 1 0 1 ...
$ X_predictors: num [1:331, 1:3] 0.665 0.665 0.665 0.665 0.665 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:331] "1" "1.1" "1.2" "1.3" ...
.. ..$ : chr [1:3] "x1" "x2" "x3"
In contrast to the NHPP, the JPLP is estimated at the trip level. Here are the definitions of the simulated JPLP data passed to Stan:
Note that trip start and end times are counted from the beginning of the shift, and rest time between trips is excluded from the calculation.
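In the JPLP, each successive trip within a shift discounts the intensity by a factor \(\kappa\). Restating the trip-level likelihood implemented by `jplp_log()` in the Stan code below: for the \(r\)-th trip on \((a_r, b_r]\) with event times \(t_1, \dots, t_n\),

```latex
\[
\lambda^{*}(t \mid \beta, \theta, \kappa) = \kappa^{\,r-1}\,\frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta-1},
\qquad a_r < t \le b_r,
\]
\[
\ell(\beta, \theta, \kappa) = \sum_{i=1}^{n}\left[(r-1)\log\kappa + \log\beta - \beta\log\theta + (\beta-1)\log t_i\right]
- \kappa^{\,r-1}\,\theta^{-\beta}\left(b_r^{\beta} - a_r^{\beta}\right).
\]
```

A trip with no events contributes only the second (censoring) term, which is what the no-event function returns.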
The Stan code for both models uses self-defined likelihood functions, which are derived in the manuscript.
stan_NHPP =
'// Stan code to estimate a hierarchical PLP process
functions{
real nhpp_log(vector t, real beta, real theta, real tau){
vector[num_elements(t)] loglik_part;
real loglikelihood;
for (i in 1:num_elements(t)){
loglik_part[i] = log(beta) - beta*log(theta) + (beta - 1)*log(t[i]);
}
loglikelihood = sum(loglik_part) - (tau/theta)^beta;
return loglikelihood;
}
real nhppnoevent_lp(real tau, real beta, real theta){
real loglikelihood = - (tau/theta)^beta;
return(loglikelihood);
}
}
data {
int<lower=1> N; // total # of failures
int<lower=1> K; // number of predictors
int<lower=1> S; // total # of shifts
int<lower=1> D; // total # of drivers
int<lower=1> id[S]; // driver index, must be an array
vector<lower=0>[S] tau; // truncated time
vector<lower=0>[N] event_time; // failure time
int group_size[S]; // group sizes
matrix[S, K] X_predictors; // predictor variable matrix
}
transformed data{
matrix[S, K] X_centered;
vector[K] X_means;
for(k0 in 1:K){
X_means[k0] = mean(X_predictors[, k0]);
X_centered[,k0] = X_predictors[, k0] - X_means[k0];
}
}
parameters{
real mu0; // hyperparameter: mean
real<lower=0> sigma0; // hyperparameter: s.e.
real<lower=0> beta; // shape parameter
vector[K] R1_K; // fixed parameters each of K predictors
vector[D] R0; // random intercept for each of D drivers
}
model{
int position = 1;
vector[S] theta_temp;
for (s0 in 1:S){
theta_temp[s0] = exp(R0[id[s0]] + X_centered[s0,]*R1_K);
}
for (s1 in 1:S){
if(group_size[s1] == 0) {
target += nhppnoevent_lp(tau[s1], beta, theta_temp[s1]);
}else{
segment(event_time, position, group_size[s1]) ~ nhpp(beta, theta_temp[s1], tau[s1]);
position += group_size[s1];
}
}
beta ~ gamma(1, 1);
R0 ~ normal(mu0, sigma0);
R1_K ~ normal(0, 10);
mu0 ~ normal(0, 10);
sigma0 ~ gamma(1, 1);
}
generated quantities{
real mu0_true = mu0 - dot_product(X_means, R1_K);
vector[D] R0_true = R0 - dot_product(X_means, R1_K);
//real theta_correct = theta_temp - dot_product(X_centered, R1_K);
}
'
stan_JPLP =
'
functions{
// Log-likelihood function for trips with events (N_{event} > 0)
real jplp_log(vector t_event, // time of SCEs
real trip_start,
real trip_end,
int r,// trip index
real beta,
real theta,
real kappa)
{
vector[num_elements(t_event)] loglik;
real loglikelihood;
for (i in 1:num_elements(t_event))
{
loglik[i] = (r - 1)*log(kappa) + log(beta) - beta*log(theta) + (beta - 1)*log(t_event[i]);
}
loglikelihood = sum(loglik) - kappa^(r - 1)*theta^(-beta)*(trip_end^beta - trip_start^beta);
return loglikelihood;
}
// Log-likelihood function for trips with no event (N_{event} = 0)
real jplpoevent_lp(real trip_start,
real trip_end,
int r,
real beta,
real theta,
real kappa)
{
real loglikelihood = - kappa^(r - 1)*theta^(-beta)*(trip_end^beta - trip_start^beta);
return(loglikelihood);
}
}
data {
int<lower=0> N; //total # of events
int<lower=1> D; //total # of drivers
int<lower=1> K; //number of predictors
int<lower=0> S; //total # of trips, not shifts!!
int<lower=1> id[S];//driver index, must be an array
int r_trip[S];//index of trip $r$
vector<lower=0>[S] t_trip_start;//trip start time
vector<lower=0>[S] t_trip_end;//trip end time
vector<lower=0>[N] event_time; //failure time
int group_size[S]; //group sizes
matrix[S, K] X_predictors;//predictor variable matrix
}
transformed data{
matrix[S, K] X_centered;
vector[K] X_means;
for(k0 in 1:K){
X_means[k0] = mean(X_predictors[, k0]);
X_centered[,k0] = X_predictors[, k0] - X_means[k0];
}
}
parameters{
real mu0; // hyperparameter
real<lower=0> sigma0;// hyperparameter
real<lower=0> beta;
real<lower=0, upper=1> kappa;
vector[K] R1_K; // fixed parameters for K predictors
vector[D] R0; // random intercept for D drivers
}
model{
int position = 1;
vector[S] theta_temp;
for (s0 in 1:S){
theta_temp[s0] = exp(R0[id[s0]] + X_centered[s0,]*R1_K);
}
for (s1 in 1:S){ // Likelihood estimation for JPLP based on trips, not shifts
if(group_size[s1] == 0){
target += jplpoevent_lp(t_trip_start[s1], t_trip_end[s1], r_trip[s1], beta, theta_temp[s1], kappa);
}else{
segment(event_time, position, group_size[s1]) ~ jplp(t_trip_start[s1], t_trip_end[s1], r_trip[s1], beta, theta_temp[s1], kappa);
position += group_size[s1];
}
}
//PRIORS
beta ~ gamma(1, 1);
kappa ~ uniform(0, 1);
R0 ~ normal(mu0, sigma0);
R1_K ~ normal(0, 10);
mu0 ~ normal(0, 10);
sigma0 ~ gamma(1, 1);
}
generated quantities{
real mu0_true = mu0 - dot_product(X_means, R1_K);
vector[D] R0_true = R0 - dot_product(X_means, R1_K);
//real theta_correct = theta_temp - dot_product(X_centered, R1_K);
}
'
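The fits below compile the models from standalone `.stan` files in the `Stan/` folder. If only this page is available, the same models can be compiled directly from the inline strings above via the `model_code` argument of `rstan::stan()`; a minimal sketch (the `fit_NHPP_inline` name is ours, and the first compilation takes a minute or two):

```r
# Compile and sample directly from the inline Stan string rather than the
# external .stan file; the model and data are identical to fit_NHPP below
fit_NHPP_inline = stan(model_code = stan_NHPP,
                       chains = 4, iter = 3000, warmup = 1000,
                       data = df_NHPP$hier_dat)
```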
fit_NHPP = stan("Stan/nhpp_plp_hierarchical.stan",
chains = 4, iter = 3000, warmup = 1000, data = df_NHPP$hier_dat)
SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000144 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.44 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 1: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 1: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 1: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 1: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 1: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 1: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 1: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 1: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 1: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 1: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 1: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 1.30605 seconds (Warm-up)
Chain 1: 2.2562 seconds (Sampling)
Chain 1: 3.56225 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 7.8e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.78 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 2: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 2: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 2: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 2: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 2: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 2: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 2: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 2: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 2: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 2: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 2: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 1.31365 seconds (Warm-up)
Chain 2: 2.22392 seconds (Sampling)
Chain 2: 3.53757 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 9.2e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.92 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 3: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 3: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 3: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 3: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 3: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 3: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 3: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 3: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 3: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 3: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 3: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 1.36673 seconds (Warm-up)
Chain 3: 2.18768 seconds (Sampling)
Chain 3: 3.55441 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 0.0001 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 4: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 4: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 4: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 4: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 4: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 4: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 4: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 4: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 4: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 4: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 4: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 1.33248 seconds (Warm-up)
Chain 4: 2.24589 seconds (Sampling)
Chain 4: 3.57837 seconds (Total)
Chain 4:
fit_JPLP = stan("Stan/jplp_hierarchical.stan",
chains = 4, iter = 3000, warmup = 1000, data = df_JPLP$stan_dt)
SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000227 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.27 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 1: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 1: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 1: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 1: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 1: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 1: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 1: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 1: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 1: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 1: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 1: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 4.30574 seconds (Warm-up)
Chain 1: 7.40268 seconds (Sampling)
Chain 1: 11.7084 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 0.000204 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 2.04 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 2: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 2: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 2: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 2: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 2: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 2: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 2: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 2: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 2: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 2: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 2: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 3.96027 seconds (Warm-up)
Chain 2: 7.68066 seconds (Sampling)
Chain 2: 11.6409 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 0.000191 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 1.91 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 3: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 3: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 3: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 3: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 3: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 3: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 3: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 3: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 3: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 3: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 3: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 3.81346 seconds (Warm-up)
Chain 3: 6.94023 seconds (Sampling)
Chain 3: 10.7537 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 0.000413 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.13 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 3000 [ 0%] (Warmup)
Chain 4: Iteration: 300 / 3000 [ 10%] (Warmup)
Chain 4: Iteration: 600 / 3000 [ 20%] (Warmup)
Chain 4: Iteration: 900 / 3000 [ 30%] (Warmup)
Chain 4: Iteration: 1001 / 3000 [ 33%] (Sampling)
Chain 4: Iteration: 1300 / 3000 [ 43%] (Sampling)
Chain 4: Iteration: 1600 / 3000 [ 53%] (Sampling)
Chain 4: Iteration: 1900 / 3000 [ 63%] (Sampling)
Chain 4: Iteration: 2200 / 3000 [ 73%] (Sampling)
Chain 4: Iteration: 2500 / 3000 [ 83%] (Sampling)
Chain 4: Iteration: 2800 / 3000 [ 93%] (Sampling)
Chain 4: Iteration: 3000 / 3000 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 4.09818 seconds (Warm-up)
Chain 4: 7.16803 seconds (Sampling)
Chain 4: 11.2662 seconds (Total)
Chain 4:
est_NHPP = broom::tidy(fit_NHPP, conf.int = T, rhat = T, ess = T)
est_NHPP
# A tibble: 27 x 7
term estimate std.error conf.low conf.high rhat ess
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
1 mu0 2.01 0.206 1.60 2.41 1.00 1346
2 sigma0 0.558 0.176 0.308 0.994 1.00 1103
3 beta 1.26 0.0803 1.11 1.42 1.00 1162
4 R1_K[1] 0.938 0.0933 0.765 1.13 1.00 1110
5 R1_K[2] 0.273 0.0734 0.136 0.426 0.999 1798
6 R1_K[3] 0.212 0.0497 0.115 0.309 1.00 1822
7 R0[1] 1.63 0.127 1.39 1.89 0.999 1282
8 R0[2] 1.68 0.144 1.40 1.96 0.999 1826
9 R0[3] 2.90 0.303 2.36 3.52 1.00 1483
10 R0[4] 1.94 0.183 1.59 2.30 1.00 2608
# … with 17 more rows
t_NHPP = stan_trace(fit_NHPP, pars = c('mu0', 'sigma0', 'beta'), ncol = 1)
t_NHPP
est_JPLP = broom::tidy(fit_JPLP, conf.int = T, rhat = T, ess = T)
est_JPLP
# A tibble: 28 x 7
term estimate std.error conf.low conf.high rhat ess
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
1 mu0 2.12 0.227 1.69 2.58 1.00 4288
2 sigma0 0.536 0.190 0.262 0.993 1.00 4814
3 beta 1.19 0.117 0.969 1.42 1.00 2927
4 kappa 0.801 0.0780 0.650 0.955 1.00 2693
5 R1_K[1] 1.00 0.134 0.768 1.29 1.00 2725
6 R1_K[2] 0.220 0.0911 0.0528 0.407 1.00 6879
7 R1_K[3] 0.238 0.0701 0.108 0.385 1.00 5374
8 R0[1] 1.77 0.185 1.42 2.15 1.00 4215
9 R0[2] 2.01 0.216 1.60 2.45 1.00 4610
10 R0[3] 2.58 0.299 2.06 3.21 1.00 3819
# … with 18 more rows
t_JPLP = stan_trace(fit_JPLP, pars = c('mu0', 'sigma0', 'beta', 'kappa'), ncol = 1)
t_JPLP
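Beyond `broom::tidy()`, individual posterior draws can be pulled out with `rstan::extract()` and summarized directly; for example, for \(\beta\) and \(\kappa\) from the JPLP fit (a sketch using the `fit_JPLP` object created above):

```r
# Extract posterior draws (warmup excluded by default) and summarize kappa
post = rstan::extract(fit_JPLP, pars = c('beta', 'kappa'))
mean(post$kappa)                       # posterior mean
quantile(post$kappa, c(0.025, 0.975))  # 95% credible interval
```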
knitr::include_graphics("Figures/Aim3_trace_plot.jpeg")
sessionInfo()
R version 3.6.2 (2019-12-12)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base
other attached packages:
[1] broom_0.5.2 rstan_2.19.2 StanHeaders_2.18.1-10
[4] ggplot2_3.3.0.9000 dplyr_0.8.3
loaded via a namespace (and not attached):
[1] tidyselect_0.2.5 xfun_0.12 purrr_0.3.3
[4] lattice_0.20-38 colorspace_1.4-1 generics_0.0.2
[7] vctrs_0.2.2 htmltools_0.4.0 stats4_3.6.2
[10] loo_2.1.0 yaml_2.2.0 utf8_1.1.4
[13] rlang_0.4.4 pkgbuild_1.0.6 pillar_1.4.3
[16] glue_1.3.1 withr_2.1.2 matrixStats_0.54.0
[19] lifecycle_0.1.0 stringr_1.4.0 munsell_0.5.0
[22] gtable_0.3.0 codetools_0.2-16 evaluate_0.14
[25] labeling_0.3 inline_0.3.15 knitr_1.23
[28] callr_3.4.2 ps_1.3.2 parallel_3.6.2
[31] fansi_0.4.1 Rcpp_1.0.3 scales_1.1.0
[34] backports_1.1.5 farver_2.0.3 gridExtra_2.3
[37] distill_0.8 digest_0.6.24 stringi_1.4.5
[40] processx_3.4.2 grid_3.6.2 cli_2.0.1
[43] tools_3.6.2 magrittr_1.5 tibble_2.1.3
[46] crayon_1.3.4 tidyr_1.0.0 pkgconfig_2.0.3
[49] prettyunits_1.1.1 assertthat_0.2.1 rmarkdown_2.1
[52] rstudioapi_0.11 R6_2.4.1 nlme_3.1-142
[55] compiler_3.6.2